EC Library Guide on artificial intelligence, algorithms and the risk of discrimination: Selected publications
Selected EU publications
- 2024 report on gender equality in the EU
European Commission, Directorate-General for Justice and Consumers, Publications Office of the European Union, 2024.
Gender equality is a core EU value under the Treaties, with the EU Gender Equality Strategy 2020-2025 delivering on the von der Leyen Commission’s commitment to achieving a Union of Equality. 2023, the fourth year of the strategy, was a particularly important and successful year for gender equality in the EU. In May, the Pay Transparency Directive was adopted. It aims to strengthen the principle of equal pay for equal work or work of equal value through pay transparency measures and enforcement mechanisms. The Directive tackles pay secrecy, making it easier for all workers to find out whether their gender was a factor in setting their salary and to defend their right to equal pay in court.
In June, the EU’s accession process to the Council of Europe Convention on Preventing and Combating Violence Against Women and Domestic Violence (commonly known as the Istanbul Convention) was finally concluded. With the accession, the EU is bound by ambitious and comprehensive standards to prevent and combat violence against women and domestic violence in the area of judicial cooperation in criminal matters, asylum and non-refoulement, and with regard to its institutions and public administration. In December, the Council of the EU reached a provisional agreement with the European Parliament on a new EU directive to strengthen the independence and powers of national equality bodies. The directive covers the mandate of equality bodies in the field of gender equality in employment and occupation. Most recently, a political agreement has been reached on the proposal for a directive of the European Parliament and of the Council on combating violence against women and domestic violence. The proposed directive is a milestone - the first comprehensive legal instrument at EU level to tackle violence against women, which is still too pervasive in the European Union. With this directive, all victims of violence against women and domestic violence across the European Union will benefit from the same comprehensive set of measures of protection, support and access to justice. The report focuses on the key actions and achievements of EU institutions in this area. It also provides encouraging examples of legislative and policy developments by Member States (indicated in the boxes), and of work by EU-funded projects in the above areas.
- Addressing AI risks in the workplace: Workers and algorithms
European Parliament briefing, 2024.
Algorithms and artificial intelligence (AI) are changing the way people live and work. Depending on how AI technologies are used and what purpose they serve, they can drive progress and benefit the whole of society, but they also raise ethical concerns and may cause harm. When introduced to the world of work, their transformative potential runs into complex national and EU rules. Existing labour laws, put in place before AI systems came on the scene, do not appear fit to provide meaningful guardrails. As with any new technology, tensions arise between two opposing regulatory approaches: strict regulation to safeguard society from potential hazards, and minimum regulation to promote the technology’s deployment and innovation.
For employers who invest in AI systems, the main motivation is better workplace organisation, increased productivity, and competitiveness. Workers, on the other hand, may fear losing their jobs, and also want to have a say in how AI and algorithms are to become part of their daily lives. Focusing on workplace deployment of AI, this briefing looks at the state of play of algorithmic management in the workplace and some issues relating to the data that algorithms use and generate. It offers an overview of the current top-down EU legislative approach, of insights brought by the European Parliament, and of advances in collective bargaining, demonstrating the potential of a bottom-up approach to complement AI deployment. The briefing looks at the potential use of sleeping clauses in the existing EU legal framework and – taking note of the views of both employers and trade unions – highlights the many open questions that remain.
- Addressing racism in policing
European Union Agency for Fundamental Rights, Publications Office of the European Union, 2024.
Racism in the police ranges from discriminatory racial profiling practices to excessive use of force. Repeated incidents like these highlight the deeper structural issues that need to be uprooted from policing across the EU. Everyone in society is affected by racism in policing, not only the individuals or communities targeted. Lack of trust in policing fuels social exclusion and damages the foundations of a fair and equal society. This is the first EU-wide report on racism in policing. FRA’s findings identify gaps in regulatory frameworks and propose concrete steps for action.
EU countries should ensure that their police forces comply with anti-racism provisions in EU and international law. Member States should collect data on racist incidents. They should enable whistle-blowers to report misconduct without negative consequences and ensure independent oversight. Police forces should be more diverse, so as to represent the communities they serve, and should receive more guidance on preventing racism in their work. Through this report, FRA supports EU countries in making a decisive effort to tackle racism in policing.
- Adopt AI study – Final study report
European Commission, Directorate-General for Communications Networks, Content and Technology (CNECT), 2024.
A study commissioned by the European Commission highlights the significant potential of Artificial Intelligence (AI) to improve public sector services across the EU. The report emphasizes that AI can enhance citizen-government interactions, boost analytical capabilities, and increase efficiency in key areas such as healthcare, mobility, e-Government, and education. These sectors are identified as among the most ready for large-scale AI deployment, with applications ranging from autonomous vehicles and smart traffic systems to AI-driven healthcare solutions and education technologies.
However, the study also outlines several challenges hindering AI uptake in the public sector. These include complex public procurement processes, difficulties in data management, a lack of regulatory clarity, and concerns about bias in AI decision-making. In response, the report provides a series of policy recommendations aimed at accelerating AI adoption. These include increasing funding and resources for AI in public services, ensuring transparency and accountability in AI systems, promoting cross-border data sharing, and aligning industry and public sector expectations. The European Commission is advised to create a clear regulatory framework for AI, prioritise long-term implementation, and foster human-centric, trustworthy AI solutions. By addressing these challenges, the EU aims to position itself as a global leader in the development of trustworthy and sustainable AI technologies for the public sector.
- AI report – By the European Digital Education Hub’s Squad on artificial intelligence in education
European Commission, European Education and Culture Executive Agency, Publications Office of the European Union, 2023.
We have seen in the previous discussion and scenarios that AI has the potential to deliver great benefits for education. However, we have also seen that there are risks associated with its use. In many cases, we may determine that these are minimal risks. Examples we’ve discussed include the provision of formative feedback, help for teachers in creating lesson plans, and assistance with some of the administrative functions of schools. As we move away from the use of AI as a support system, the risk increases. As we have seen, using AI for learning analytics may help teachers adjust their teaching strategies to cater to individual needs. However, using learning analytics without adequate teacher oversight may disadvantage students dealing with adverse life circumstances that are affecting their performance, thus increasing the risk level.
When it comes to relying on AI for decisions that may affect a learner’s future opportunities, we are moving into ‘high’ and perhaps ‘unacceptable’ risk territory. Therefore, we can see that the level of risk resides not so much within the tools themselves as within the contexts in which they are used. While human oversight may help to mitigate some of the risks, we should be aware of the danger of dependence lock-in, in which humans become increasingly dependent on AI to make decisions. All this underscores the importance of the development of Explainable AI, as discussed above. To ensure its responsible use in educational settings, it is important to remain aware of the balance that needs to be struck between leveraging AI’s benefits, evaluating and mitigating its potential risks, and ensuring that human oversight is included and human values are served.
- AI Watch, road to the adoption of artificial intelligence by the public sector – A handbook for policymakers, public administrations and relevant stakeholders
European Commission, Joint Research Centre, Manzoni, M., Medaglia, R., et al., Publications Office of the European Union, 2022.
This handbook is published in the context of AI Watch, the European Commission’s knowledge service to monitor the development, uptake and impact of AI for Europe, which was launched in December 2018 as part of the Coordinated Plan on the Development and Use of Artificial Intelligence Made in Europe. Within AI Watch, a specific task addresses the role of AI for the public sector and sets out to provide actionable guidelines for the adoption of AI in the public sector in the EU. The public sector deserves special attention in this regard, as it differs from the private sector in a number of ways.
First of all, the public sector’s mandate is the protection and sound management of citizens and the public good, and it is governed by the rule of law. Based on these two fundamental principles, public sector administrations differ from private organisations in a number of characteristics underpinning their values and determining their objectives, instruments, roles and relationships with other actors. It is therefore likely that the conditions for the adoption and use of AI technology in the public sector cannot be modelled simply on those of private enterprises, in terms of aims, needs, operations, instruments and processes. The purpose of this handbook is to provide recommendations to policymakers and relevant stakeholders on the sensible adoption and use of AI in and by the public sector in Europe. The recommendations and actions provided in this handbook are intended to support forward-looking managers, practitioners and innovators throughout the public services delivery chain and at European, national and local governance levels. These recommendations stand to support the joint commitment of the European Commission, Member States and associated countries, as outlined in the Coordinated Plan on Artificial Intelligence 2021 Review (‘AI Coordinated Plan’). In its annexes, the handbook provides a mapping of the different recommendations, articulated into actions, to the stakeholders competent for them at the operational level. The mapping of recommendations to stakeholders is summarised in a self-explanatory table articulated around the selected areas of intervention and the different operational levels.
- AI Watch – Artificial intelligence standardisation landscape update
European Commission, Joint Research Centre, Soler Garrido, J., Tolan, S., et al., Publications Office of the European Union, 2023.
In April 2021, the European Commission presented the AI Act, its proposed legislative framework for Artificial Intelligence, which sets the necessary regulatory conditions for the adoption of trustworthy AI practices in the European Union. The AI Act adopts a risk-based approach, laying down a set of legal requirements for certain AI systems, primarily those classified as high-risk. At the time of writing, the AI Act is under negotiation between the co-legislators, the European Parliament and the Council of the European Union. Once an agreement is found and the final legal text comes into force, standards will play a fundamental role in supporting providers of the AI systems concerned. Standards are set to bring the necessary level of technical detail into the essential requirements prescribed in the legal text, defining concrete processes, methods and techniques that AI providers can implement in order to comply with their legal obligations.
Indeed, harmonised standards – produced by European standardisation organisations on the basis of a formal standardisation request issued by the European Commission – provide operators with a presumption of conformity with the legal requirements of the EU harmonisation legislation in question, once they are accepted and their references are published in the Official Journal. However, the drafting of standards is an elaborate process, requiring extensive technical expertise and the coordination of multiple stakeholders. Fortunately, AI has been an active area of work for many standards development organisations in recent years, resulting in a wealth of specifications with the potential to support the future AI Act. In this report, we analyse a set of such specifications, selected from the broad range of standards and certification criteria produced by the IEEE Standards Association covering aspects of trustworthy AI. Several of the documents analysed have been found to provide highly relevant technical content from the point of view of the AI Act. Furthermore, some of them cover important standardisation gaps identified in previous analyses. This work is intended to provide independent input to European and international standardisers currently planning AI standardisation activities in support of regulatory needs. The report identifies concrete elements in IEEE standards and certification criteria that could fulfil standardisation needs emerging from the European AI Regulation proposal, and provides recommendations for their potential adoption and development in this direction.
- Artificial intelligence (AI) and human rights – Using AI as a weapon of repression and its impact on human rights – In-depth analysis
European Parliament, Directorate-General for External Policies of the Union, Ünver, A., Publications Office of the European Union, 2024.
This in-depth analysis (IDA) explores the most prominent actors, cases and techniques of algorithmic authoritarianism, together with the legal, regulatory and diplomatic framework related to AI-based biases as well as deliberate misuses. With the world leaning heavily towards digital transformation, AI’s use in policy, economic and social decision-making has introduced alarming trends in repressive and authoritarian agendas. Such misuse grows ever more relevant to the European Parliament, resonating with its commitment to safeguarding human rights in the context of digital transformation. By shedding light on global patterns and rapidly developing technologies of algorithmic authoritarianism, this IDA aims to produce a wider understanding of the complex policy, regulatory and diplomatic challenges at the intersection of technology, democracy and human rights.
Insights into AI’s role in bolstering authoritarian tactics offer a foundation for Parliament’s advocacy and policy interventions, underscoring the urgency for a robust international framework to regulate the use of AI, whilst ensuring that technological progress does not weaken fundamental freedoms. Detailed case studies and policy recommendations serve as a strategic resource for Parliament’s initiatives: they highlight the need for vigilance and proactive measures by combining partnerships (technical assistance), industrial thriving (AI Act), influence (regulatory convergence) and strength (sanctions, export controls) to develop strategic policy approaches for countering algorithmic control encroachments.
- Artificial intelligence and algorithms in risk assessment – Addressing bias, discrimination and other legal and ethical issues – A handbook
European Labour Authority, 2023.
This ELA handbook aims to enhance the understanding of bias and of the related legal, ethical and practical issues that may arise in the development and use of algorithms and Artificial Intelligence (AI) for risk assessment. It provides insights into relevant regulations and into methods to mitigate bias and prevent discrimination.
The handbook was elaborated as part of the 2023 ELA programme on addressing ethical and practical issues in the use of Artificial Intelligence and algorithms in risk assessment. This programme included an online training about addressing bias in AI and algorithms for risk assessment, and related ethical, legal and practical issues, which constitutes the basis of this handbook. This training was provided by Prof. Raphaële Xenidis and Prof. Benjamin van Giffen.
- Artificial intelligence and cybersecurity research – ENISA research and innovation Brief
European Union Agency for Cybersecurity, Ntalampiras, S., Pascu, C., et al., European Union Agency for Cybersecurity, 2023.
The aim of this study is to identify needs for research on AI for cybersecurity and on securing AI, as part of ENISA’s work in fulfilling its mandate under Article 11 of the Cybersecurity Act. This report is one of the outputs of this task. In it we present the results of the work carried out in 2021 and subsequently validated in 2022 and 2023 with stakeholders, experts and community members such as the ENISA AHWG on Artificial Intelligence. ENISA will make its contribution through the identification of five key research needs that will be shared and discussed with stakeholders as proposals for future policy and funding initiatives at the level of the EU and Member States.
- Artificial Intelligence for interoperability in the European public sector – An exploratory study
European Commission, Joint Research Centre, Tangi, L., Rodriguez Müller, A., et al., Publications Office of the European Union, 2023.
This report presents the results of a research study conducted within the context of the Public Sector Tech Watch, an observatory developed by DG DIGIT with the support of the Joint Research Centre (JRC), which provides a knowledge hub and a virtual space where public administrations, civil society, GovTech companies and researchers can find and share knowledge and experience. The report’s primary goal is to offer an analysis of how Artificial Intelligence (AI) systems are improving interoperability in the European public sector.
The findings are based on three pillars: (i) a literature and policy review on the synergies between AI and interoperability; (ii) a quantitative analysis of a selected set of 189 use cases fitting the research question; and (iii) a qualitative study going deeper into some illustrative cases. The findings highlight that one-fourth of the cases collected use AI techniques to support interoperability through a varied set of applications. Moreover, the semantic interoperability layer is fundamental in most of the cases. In addition, ontologies and taxonomies combined with AI can help establish interoperability between different systems. The solutions analysed classify, detect and provide structure, among other actions performed on data. Hence, AI can standardise, clean and structure large volumes of data and increase their usage, thus improving overall quality and making data easier to use and share between different systems.
- Artificial intelligence for the public sector
European Commission, Joint Research Centre, Farrell, E., Giubilei, M., et al., Publications Office of the European Union, 2023.
The public sector plays different roles with regard to Artificial Intelligence (AI). First, it acts as a regulator, establishing the legal framework for the use of AI within society. Second, governments also play the role of accelerator, providing funding and support for the uptake of AI. Third, public sector organisations develop and use AI themselves. To explore these roles, with particular emphasis on the latter, the Joint Research Centre (JRC) and the Directorate-General for Informatics (DIGIT) of the European Commission jointly organised a webinar series and a “science for policy” conference in 2022. This report includes the conclusions of each of the webinars, together with the material and main findings of the closing event.
It reveals recent challenges, opportunities and policy perspectives in the use of AI in the public sector, and distils a set of short takeaway messages. In a nutshell, these findings are: (i) AI in the public sector involves multiple stakeholders; (ii) experiment first, scale up later; (iii) trustworthiness is a must; (iv) the public sector needs upskilling to be ready for the AI revolution; and (v) procurement must be adapted for digital and AI innovation. The report concludes that the promise of AI is high for society, and in particular for the public sector, but its risks are not to be minimised. Europe has the ambition to succeed as a whole in the digital transition powered by data and AI-based applications, and wants to do so the European way, by putting citizens at the centre of this transformation. We hope that, with the results of these discussions, we have contributed to this necessary debate, which is key to making this Europe’s Digital Decade.
- Bias in algorithms – Artificial intelligence and discrimination
European Union Agency for Fundamental Rights, Publications Office of the European Union, 2022.
Artificial intelligence is everywhere and affects everyone – from deciding what content people see on their social media feeds to determining who will receive state benefits. AI technologies are typically based on algorithms that make predictions to support or even fully automate decision-making. This report looks at the use of artificial intelligence in predictive policing and offensive speech detection. It demonstrates how bias in algorithms appears, can amplify over time and affect people’s lives, potentially leading to discrimination. It corroborates the need for more comprehensive and thorough assessments of algorithms in terms of bias before such algorithms are used for decision-making that can have an impact on people.
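The report’s observation that bias can amplify over time can be illustrated with a deliberately stylised feedback-loop sketch (a hypothetical illustration, not taken from the FRA report): two districts share the same underlying incident rate, but incidents are only recorded where patrols are present, and the next allocation is assumed to respond superlinearly to recorded counts, so a small initial skew compounds.

```python
# Stylised feedback-loop sketch (hypothetical, not from the FRA report).
# Both districts have the SAME underlying incident rate; the only asymmetry
# is a small initial skew in patrol allocation.
TRUE_RATE = 0.1          # identical underlying incident rate in both districts
share = [0.55, 0.45]     # small initial bias in patrol allocation

for step in range(20):
    # Incidents are recorded in proportion to patrol presence (100 patrol-hours),
    # not in proportion to the underlying rate alone.
    recorded = [TRUE_RATE * 100 * s for s in share]
    # Hypothetical assumption: allocation responds superlinearly to counts -
    # more records attract disproportionately more patrols, which in turn
    # generate more records.
    weights = [r ** 1.5 for r in recorded]
    share = [w / sum(weights) for w in weights]

print(f"patrol share after 20 rounds: {share[0]:.2f} vs {share[1]:.2f}")
```

Although the two districts are statistically identical, the loop ends with essentially all patrols concentrated in district 0; the skew comes entirely from the allocation rule, not from the data, which is the kind of amplification the report assesses.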
- Data quality requirements for inclusive, non-biased and trustworthy AI – Putting science into standards
European Commission, Joint Research Centre, Balahur, A., Jenet, A., et al., Publications Office of the European Union, 2022.
A decade of rapid development of artificial intelligence (AI) has resulted in a large diversity of practical applications across different sectors. Data play a fundamental role in AI systems, which can be seen as adaptive data processing algorithms that adjust outputs to input training data. This fundamental role of data is reflected in the EU policy agenda, where, for example, guidance on handling data is specified in the AI Act. In response to the needs of the AI Act, the Joint Research Centre, in collaboration with the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation, organised the Putting Science Into Standards workshop on data quality requirements for inclusive, non-biased and trustworthy artificial intelligence.
The workshop took place on 8 and 9 June 2022, with more than 178 participants from 36 countries, gathering for the first time European standardisation experts, legislators, scientists and societal stakeholders to map pre-normative research and standardisation needs. The workshop highlighted existing standards and the need for new ones, spanning the creation and documentation of datasets through to data quality requirements and the examination and mitigation of bias in AI systems. It also identified the steps needed to start the process of drafting new standards and recognised that inclusiveness and full representation of all relevant stakeholders, including industry, SME representatives, civil society and academia, is crucial. Building a stronger engagement of experts in AI standardisation is essential to contribute to the development of standards, not only to support the market deployment of AI systems in accordance with the AI Act, but also to support this growing field of research.
- Diversity in artificial intelligence conferences – An analysis of indicators for gender, country and institution diversity from 2007 to 2023
European Commission, Joint Research Centre, Gómez, E., Porcaro, L., Frau, P. and Vinagre, J., Publications Office of the European Union, 2024.
This report provides an overview of the divinAI project and a set of diversity indicators for seven core artificial intelligence (AI) conferences from 2007 to 2023: the International Joint Conference on Artificial Intelligence (IJCAI), the annual Association for the Advancement of Artificial Intelligence (AAAI) Conference, the International Conference on Machine Learning (ICML), the Neural Information Processing Systems (NeurIPS) Conference, the Association for Computing Machinery (ACM) Recommender Systems (RecSys) Conference, the European Conference on Artificial Intelligence (ECAI) and the European Conference on Machine Learning/Practice of Knowledge Discovery in Databases (ECML/PKDD). We observe that, in general, Conference Diversity Index (CDI) values are still low for the selected conferences, although they show a slight improvement over time thanks to diversity initiatives in the AI field.
We also note slight differences between conferences, with RecSys showing the highest comparative diversity indicators, followed by the general AI conferences (IJCAI, ECAI and AAAI). The selected machine learning conferences, NeurIPS and ICML, show lower values for diversity indicators. Regarding the different dimensions of diversity, gender diversity reflects a low proportion of female authors in all the conferences considered, even given current gender diversity efforts in the field, which is in line with the low presence of women in technological fields. In terms of country distribution, we observe a notable presence of researchers from the EU, US and China in the selected conferences, with the presence of Chinese authors increasing in recent years. Regarding institutions, universities and research centres or institutes play a central role in the AI scientific conferences under analysis, while the presence of industry seems to be more notable in the machine learning conferences. An online dashboard that allows exploration and reproducibility complements the report.
- Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators
European Commission, Directorate-General for Education, Youth, Sport and Culture, Publications Office of the European Union, 2022.
These ethical guidelines on AI and data usage in teaching and learning are designed to help educators understand the potential that the applications of AI and data usage can have in education and to raise awareness of the possible risks so that they are able to engage positively, critically and ethically with AI systems and exploit their full potential.
- Glossary of human-centric artificial intelligence
European Commission, Joint Research Centre, Estévez Almenzar, M., Fernández Llorca, D., et al., Publications Office of the European Union, 2022.
Over the last few years, Artificial Intelligence (AI) has become a very active research topic, moving from a purely technical field to an interdisciplinary research domain and a very active topic in terms of policy developments. The European approach to AI focuses on two main areas, excellence and trust, enabling the development and uptake of AI while ensuring people’s safety and fundamental rights. However, research and policy documentation do not always use the same vocabulary, often generating misunderstandings among researchers, policymakers and the general public. Based on existing literature at the intersection of research, industry and policy, and drawing on the expertise and know-how of the Joint Research Centre, we present here a glossary of terms on AI with a focus on a human-centric approach, covering concepts related to trustworthy artificial intelligence such as transparency, accountability and fairness.
We have collected 230 different terms from more than 10 different general sources including standards, policy documents and legal texts, as well as multiple scientific references. Each term is accompanied by one or several definitions linked to references and complemented with our own definitions when no relevant source was found. We humbly hope that the work presented here can contribute to establishing the necessary common ground for the interdisciplinary and policy-centred debate on artificial intelligence.
- Is artificial intelligence threatening democracy?
European University Institute, Galariotis, I., European University Institute, 2024.
In a democracy, human beings make decisions with the aim of serving the will of the people and promoting the collective welfare of society. While machines can learn from data and generate potential democratic solutions, they fall short in grasping the intricacies of the subjective reality of democratic politics. Entrusting Artificial Intelligence (AI) systems with decision-making carries the risk of following optimal solutions shaped by falsified objective realities that AI algorithms aim to optimise. Even if the data were comprehensive and sufficient, modelling approaches struggle to fully encapsulate the complexities of subjective realities within global democracies and societies. In essence, leaving democratic politics to be governed by ostensibly logical AI classifiers is a significant gamble.
In the second high-level policy dialogue, which took place on 22 and 23 May 2023 in Florence under the auspices of the STG Chair in Artificial Intelligence and Democracy, scholars and policymakers discussed and shared their ideas to map the solutions available for how democratic politics can live with an AI-powered world and, more than that, how AI can be turned into a beneficial tool for democracy. Most of the participants agreed that AI can be shaped and transformed into a useful tool for democracies. In this policy brief, we summarise the key ideas that emerged from the discussions in this high-level policy dialogue.
- Law and ICT
European Parliament, Directorate-General for Internal Policies of the Union, Maciejewski, M., European Parliament, 2024.
Exponential progress in the area of ICT improves access to data and information, which in turn can lead to greater accessibility, reduced complexity, efficiency and respect for fundamental rights in policy, law-making and the implementation of law. The drafting and publication of laws need to be reformed from paper-based formats to modern digital media. Expertise, evidence and data should constitute mandatory elements of policy and law-making. Ex-post quantified evaluation of legislation needs to be applied consistently. This study was prepared by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the JURI Committee.
- Mapping ERC frontier research artificial intelligence
European Commission, European Research Council Executive Agency, Publications Office of the European Union, 2024.
The European Research Council (ERC) is the premier European funding organisation for excellent frontier research. Since its establishment in 2007, it has been a cornerstone of the EU’s research and innovation funding programmes. The ERC gives its grantees the freedom to develop ambitious research projects that can lead to advances at the frontiers of knowledge, and it sets a clear and inspirational target for frontier research across Europe. The ERC funds a rich and diverse portfolio of projects in all fields of science and scholarship, without any predefined academic or policy priorities. These projects can have an impact well beyond science, providing frontier knowledge and innovation to help solve societal challenges and inform EU policy objectives. This report aims to highlight how ERC-funded curiosity-driven research projects are developing or using artificial intelligence in their scientific processes, and how these projects and their outputs can help to both define and enable the implementation of policies related to AI and its cross-cutting applications.
This report represents the first comprehensive analysis of the ERC’s AI portfolio and is structured as follows:
1. Chapter one provides an overview of ERC-funded projects developing or using AI in science.
2. Chapter two focuses on their scientific landscape, offering a more detailed analysis of their evolution and distribution across ERC scientific domains, disciplines and topics.
3. Chapter three gives an overview of their policy landscape, linking the projects to specific policy areas and providing examples relevant to EU policies on AI.
4. Chapter four analyses a subset of ERC-funded AI projects that pose particularly pressing ethical, legal and social questions surrounding the development or use of AI.
- Successful and timely uptake of artificial intelligence in science in the EU
European Commission, Directorate-General for Research and Innovation, Publications Office of the European Union, 2024.
Artificial intelligence (AI) technologies are among the most disruptive general-purpose applications at the service of research and innovation. AI acts as a catalyst for scientific breakthroughs and is rapidly becoming a key instrument in the scientific process in all areas of research. In this Scientific Opinion (SO), the Group of Chief Scientific Advisors examines how the European Commission can accelerate the responsible uptake of artificial intelligence in science in the European Union.
It focuses on the responsible uptake of AI in science – including providing access to high-quality AI, respecting European values, and strengthening Europe’s position in science to boost innovation and prosperity in the EU. This SO is published in the context of the Scientific Advice Mechanism, which provides independent scientific evidence and policy recommendations to the European institutions at the request of the College of Commissioners.
- TechDispatch – Explainable artificial intelligence
European Data Protection Supervisor, Attoresi, M., Bernardo, V., et al., Publications Office of the European Union, 2023.
The adoption of artificial intelligence (AI) is growing rapidly in sectors such as healthcare, finance, transportation, manufacturing and entertainment. Its increasing popularity in recent years is largely due to its ability to automate tasks, such as processing large amounts of information or identifying patterns, and its widespread availability to the public. Large language models (LLMs), like ChatGPT, and text-to-image models, like Stable Diffusion, are two examples of AI that have gained great popularity in recent years. However, despite the growing use of AI, many of these systems operate in ways that are opaque to those providing AI systems (‘providers’), those deploying AI systems (‘deployers’), and those affected by their use. In the complex realm of AI systems, even the providers of these systems are often unable to explain the decisions and outcomes of the systems they have built. This phenomenon is commonly referred to as the “black box” effect.
- Use and impact of artificial intelligence in the scientific process – Foresight
European Commission, European Research Council Executive Agency, Publications Office of the European Union, 2023.
This report highlights how ERC-funded researchers are using artificial intelligence (AI) in their scientific processes, and how they see its potential impact by 2030. It summarises the findings of a foresight survey conducted among ERC grantees, which focused on their present use of AI and their views on future developments by 2030, potential opportunities and risks, and the future impact of generative AI in science, such as large language models (LLMs). Developed in collaboration with DG Research & Innovation (R&I) and its unit Science Policy, Advice & Ethics / Scientific Advice Mechanism (SAM), this report was prepared in the context of the upcoming Scientific Opinion on the responsible uptake of AI in science. The aim is to provide evidence that can inform the development and implementation of policies related to AI in the realm of science.
The use of AI in scientific and scholarly practices remains a subject of ongoing academic and policy debates at both European and international levels (Nature 2023, OECD 2023, Birhane et al. 2023, van Dis et al. 2023). AI’s deployment spans various disciplines and serves many purposes, ranging from large-scale data processing, pattern and prediction generation, and experiment design and control to the writing and peer review of scientific papers and grant proposals. The actual and potential effects and drawbacks of AI in these contexts are widely debated. This topic has come to the foreground of a European Commission policy initiative focusing on the impact of AI in research and innovation (R&I) (Arranz et al. 2023b). In terms of research that can inform policy-making, a CORDIS Results Pack on the use of AI in science has showcased a collection of EU-funded projects on the topic (including 8 ERC projects). Furthermore, an upcoming Mapping Frontier Research (MFR) report on AI from ERCEA (scheduled for release in early 2024) will bolster these efforts within the framework of its Feedback to Policy (F2P) activities, as requested by the ERC Scientific Council.
- Last Updated: Sep 26, 2024 9:19 AM
- URL: https://ec-europa-eu.libguides.com/ai-algorithms